Ultra-fine entity typing (UFET) predicts extremely free-form types (e.g., president, politician) of a given entity mention (e.g., Joe Biden) in context. State-of-the-art (SOTA) methods use a cross-encoder (CE) based architecture: CE concatenates the mention (and its context) with each type and feeds the pair into a pretrained language model (PLM) to score their relevance. This deeper interaction between the mention and each type yields better performance, but it requires N forward passes (where N is the type set size) to infer the types of a single mention. CE is therefore very slow at inference when the type set is large (e.g., N = 10k for UFET). To this end, we propose to perform entity typing in a recall-expand-filter manner. The recall and expand stages prune the large type set and generate the K (typically less than 256) most relevant type candidates for each mention. At the filter stage, we use a novel model called MCCE to concurrently encode and score these K candidates in a single forward pass to obtain the final type prediction. We investigate different variants of MCCE, and extensive experiments show that MCCE under our paradigm reaches SOTA performance on ultra-fine entity typing while being thousands of times faster than the cross-encoder. We also find MCCE to be very effective in fine-grained (130 types) and coarse-grained (9 types) entity typing. Our code is available at \url{https://github.com/modelscope/AdaSeq/tree/master/examples/MCCE}.
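To make the single-pass filtering step concrete, here is a minimal sketch of multi-candidate concurrent encoding, assuming a BERT backbone, a [SEP]-delimited packing format, and a mean-pooled linear scorer (all our assumptions, not the authors' released code): the K candidate types are packed into one input sequence, so one forward pass yields K relevance scores.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(enc.config.hidden_size, 1)  # shared scorer over candidates

def mcce_score(context: str, candidates: list[str]) -> torch.Tensor:
    # Pack "context [SEP] type_1 [SEP] ... [SEP] type_K" into ONE sequence,
    # so a single forward pass scores all K candidates concurrently.
    text = context + "".join(f" {tok.sep_token} {c}" for c in candidates)
    inputs = tok(text, return_tensors="pt", truncation=True)
    hidden = enc(**inputs).last_hidden_state[0]                   # (seq_len, d)
    sep = (inputs["input_ids"][0] == tok.sep_token_id).nonzero().squeeze(-1)
    # Mean-pool the tokens of each candidate (between consecutive [SEP]s).
    reprs = torch.stack([hidden[s + 1:e].mean(0)
                         for s, e in zip(sep[:-1], sep[1:])])
    return scorer(reprs).squeeze(-1)                              # (K,) scores

scores = mcce_score("Joe Biden visited Kyiv.", ["president", "politician", "city"])
```

Candidates whose scores exceed a tuned threshold would then form the final type prediction.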
Metric-based meta-learning is one of the de facto standards in few-shot learning. It consists of representation learning and metric calculation designs. Previous works construct class representations in different ways, varying from mean output embeddings to covariances and distributions. However, point embeddings lack expressivity and cannot capture class information robustly, while complex statistical modeling makes metric design difficult. In this work, we use tensor fields (``areas'') to model classes from a geometrical perspective for few-shot learning. We present a simple and effective method, dubbed hypersphere prototypes (HyperProto), where class information is represented by hyperspheres of dynamic size with two sets of learnable parameters: the hypersphere's center and its radius. Extending from points to areas, hyperspheres are much more expressive than embeddings. Moreover, performing metric-based classification with hypersphere prototypes is more convenient than with statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere. Following this idea, we also develop two variants of prototypes under other measurements. Extensive experiments and analysis on few-shot learning tasks across NLP and CV, together with comparisons against 20+ competitive baselines, demonstrate the effectiveness of our approach.
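The surface-distance metric described above is easy to state in code. Below is a minimal sketch, assuming Euclidean distance and a softmax-over-negative-distance classification head (both our assumptions): each class is a learnable center and radius, and a query point x is scored against class c by |&#8214;x − c&#8214; − r|.

```python
import torch

class HypersphereProto(torch.nn.Module):
    """Each class = a hypersphere with a learnable center and radius."""

    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_classes, dim))
        self.radii = torch.nn.Parameter(torch.ones(num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Distance from each point to each sphere SURFACE,
        # i.e. | ||x - c|| - r |, not just to the center point.
        d_center = torch.cdist(x, self.centers)       # (batch, num_classes)
        d_surface = (d_center - self.radii).abs()
        return -d_surface                             # logits: closer = larger

proto = HypersphereProto(num_classes=5, dim=64)
logits = proto(torch.randn(8, 64))  # train with cross-entropy as usual
```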
Entity linking in speech aims to recognize and disambiguate named entities in spoken language. Conventional methods suffer severely from unconstrained speaking styles and the noisy transcripts produced by ASR systems. In this paper, we propose a novel approach called Knowledge-Enhanced Named Entity Recognition (KENER), which focuses on improving robustness by painlessly incorporating appropriate knowledge in the entity recognition stage, thereby improving the overall performance of entity linking. KENER first retrieves candidate entities for a sentence without mentions, and then exploits the entity descriptions as extra information to help recognize mentions. The candidate entities retrieved by a dense retrieval module are especially useful when the input is short or noisy. Moreover, we study various data sampling strategies and design effective loss functions to improve the quality of retrieved entities in both the recognition and disambiguation stages. Finally, a linking-with-filtering module is applied as a final safeguard, making it possible to filter out wrongly recognized mentions. Our system achieves first place in Track 1 of NLPCC-2022 Shared Task 2.
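A minimal sketch of the retrieval idea, assuming a sentence-transformers retriever and a toy description store (the model name, separator, and augmentation format are illustrative assumptions, not KENER's actual components): candidate entities are retrieved for the raw sentence, and their descriptions are appended as extra context for the recognition model.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retriever backbone

entity_descriptions = {
    "Joe Biden": "46th president of the United States.",
    "Amazon": "American multinational technology company.",
}
names = list(entity_descriptions)
entity_vecs = encoder.encode(names, convert_to_tensor=True)

def augment_with_knowledge(sentence: str, top_k: int = 2) -> str:
    # Dense retrieval of candidate entities for the (possibly noisy) sentence.
    query = encoder.encode(sentence, convert_to_tensor=True)
    hits = util.semantic_search(query, entity_vecs, top_k=top_k)[0]
    context = " ".join(entity_descriptions[names[h["corpus_id"]]] for h in hits)
    # The NER model then tags `sentence` with `context` as auxiliary input.
    return f"{sentence} [SEP] {context}"
```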
Open-domain knowledge bases are very important. They are usually extracted from encyclopedia websites and are widely used in knowledge retrieval systems, question answering systems, and recommendation systems. In practice, the key challenge is keeping the knowledge base up to date. Unlike the clumsy approach of fetching all data from encyclopedia dumps, incremental updating can improve the freshness of the knowledge base while avoiding invalid extractions. Current knowledge base update methods usually determine whether an entity needs to be updated by building a prediction model. However, these methods can only be defined for certain specific domains, and their results turn out to be noticeably biased due to problems with the data sources and data structures. For open-domain knowledge, users' query intents are often diverse, so we build a topic-aware graph network for knowledge updating based on user query logs. Our method can be summarized as follows: (1) extract entities from user logs and select them as seeds; (2) scrape the attributes of the seed entities from encyclopedia websites and construct an entity-attribute graph for each entity in a self-supervised manner; (3) use the entity-attribute graph to train a GNN entity-update model that determines whether an entity needs to be synchronized; (4) match and update the filtered entities against the entities in the knowledge base using the encyclopedia knowledge, according to a minimum-edit-time algorithm.
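As an illustration of step (3), the following sketch shows the shape of the decision: a tiny graph layer over an entity-attribute graph produces a per-node synchronization score. The graph, node features, and single-layer design are toy assumptions standing in for the paper's topic-aware network.

```python
import torch

def gcn_layer(h, adj, weight):
    # Mean-aggregate neighbor features, then a linear transform + ReLU.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return torch.relu((adj @ h / deg) @ weight)

num_nodes, dim = 6, 16                  # entity node 0 plus attribute nodes
h = torch.randn(num_nodes, dim)         # node features (e.g., from query logs)
adj = torch.eye(num_nodes)              # self-loops ...
adj[0, 1] = adj[1, 0] = 1.0             # ... plus entity-attribute edges

w1, w2 = torch.randn(dim, dim), torch.randn(dim, 1)
sync_scores = torch.sigmoid(gcn_layer(h, adj, w1) @ w2)  # P(node needs sync)
needs_update = sync_scores[0].item() > 0.5               # decision for entity 0
```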
Successful machine-learning-based named entity recognition models may fail on texts from certain special domains, such as Chinese addresses and e-commerce titles, which require sufficient background knowledge. Such texts are also difficult for human annotators. In fact, potentially useful information can be obtained from correlated texts that share some common entities, helping with text understanding; a person can then easily arrive at the correct answer by referring to those correlated samples. In this paper, we propose to enhance NER models with correlated samples. We retrieve correlated samples with a sparse BM25 retriever from large-scale in-domain unlabeled data. To explicitly model the human reasoning process, we perform training-free entity type calibration by majority voting. To capture correlation features in the training stage, we propose to model correlated samples with a transformer-based multi-instance cross-encoder. Empirical results on datasets of the above two domains show the efficacy of our methods.
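A minimal sketch of the training-free calibration step, with a toy corpus and labels as assumptions: correlated samples are retrieved by BM25, and the mention's type is decided by majority vote over the labels it carries in those samples.

```python
from collections import Counter
from rank_bm25 import BM25Okapi

# Toy in-domain corpus: (text, {mention: entity type}) pairs.
corpus = [
    ("hangzhou west lake district wenyi road", {"west lake district": "DISTRICT"}),
    ("west lake district government office", {"west lake district": "DISTRICT"}),
]
bm25 = BM25Okapi([text.split() for text, _ in corpus])

def vote_entity_type(query: str, mention: str, top_k: int = 2) -> str:
    # Retrieve correlated samples, then majority-vote the mention's type.
    docs = bm25.get_top_n(query.split(), corpus, n=top_k)
    votes = Counter(labels[mention] for _, labels in docs if mention in labels)
    return votes.most_common(1)[0][0] if votes else "O"  # fall back to non-entity

print(vote_entity_type("shop near west lake district", "west lake district"))
```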
Text classification plays an important role in many practical applications. In the real world, datasets can be extremely small. Most existing methods adopt pre-trained neural network models to handle such datasets. However, these methods are either hard to deploy on mobile devices because of their large output sizes, or unable to fully extract the deep semantic information between phrases and clauses. This paper proposes a multi-model-based deep learning framework for short-text multi-class classification with imbalanced and extremely small datasets. Our framework mainly consists of five layers. The encoder layer uses DistilBERT to obtain context-sensitive dynamic word vectors, which are difficult to represent with traditional feature engineering methods; since the transformer part of this layer is distilled, our framework is compressed. We then use the next two layers to extract deep semantic information: the output of the encoder layer is sent to a bidirectional LSTM network, and feature matrices are extracted hierarchically by the LSTM at the word and sentence levels to obtain fine-grained semantic representations. After that, a max-pooling layer converts the feature matrix into a lower-dimensional matrix, retaining only the salient features. Finally, the feature matrix is fed into a fully connected softmax layer, which converts the predicted linear vector into output values representing the probability of the text belonging to each class. Extensive experiments on two public benchmarks demonstrate the effectiveness of our proposed approach on extremely small datasets. It matches state-of-the-art baseline performance in terms of precision, recall, accuracy, and F1 score, and given its model size, training time, and convergence epochs, we can conclude that our method can be deployed faster and more easily on mobile devices.
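A minimal sketch of the described stack, assuming a single BiLSTM in place of the hierarchical word- and sentence-level LSTMs (the hidden size is also an assumption): DistilBERT produces contextual word vectors, a BiLSTM extracts deeper semantics, max pooling keeps only the salient features, and a fully connected softmax layer outputs per-class probabilities.

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tok = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
bert = DistilBertModel.from_pretrained("distilbert-base-uncased")

class ShortTextClassifier(torch.nn.Module):
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.lstm = torch.nn.LSTM(bert.config.dim, hidden,
                                  batch_first=True, bidirectional=True)
        self.fc = torch.nn.Linear(2 * hidden, num_classes)

    def forward(self, texts: list[str]) -> torch.Tensor:
        enc = tok(texts, return_tensors="pt", padding=True, truncation=True)
        states = bert(**enc).last_hidden_state  # contextual word vectors
        feats, _ = self.lstm(states)            # deeper semantic features
        pooled = feats.max(dim=1).values        # keep only salient features
        return torch.softmax(self.fc(pooled), dim=-1)  # class probabilities

model = ShortTextClassifier(num_classes=4)
probs = model(["great value phone case", "slow shipping, item damaged"])
```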
Continually learning to segment more and more types of image regions is a desired capability for many intelligent systems. However, such continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning. While multiple knowledge distillation strategies originally designed for continual classification have been well adapted to continual semantic segmentation, they only consider transferring old knowledge based on the outputs of one or more layers of deep fully convolutional networks. Different from existing solutions, this study proposes to transfer a new type of knowledge-relevant information, i.e., the relationships between elements (e.g., pixels or small local regions) within each image, which can capture both within-class and between-class knowledge. This relationship information can be effectively obtained from the self-attention maps of a Transformer-style segmentation model. Considering that pixels belonging to the same class in each image often share similar visual properties, a class-specific region pooling is applied to provide more efficient relationship information for knowledge transfer. Extensive evaluations on multiple public benchmarks support that the proposed self-attention transfer method can further effectively alleviate the catastrophic forgetting issue, and that its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
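A minimal sketch of the transfer objective, with class-specific region pooling simplified to masked averaging over each class's tokens (the MSE loss form and shapes are our assumptions): the pooled self-attention relations of the new model are matched against those of the frozen old model.

```python
import torch
import torch.nn.functional as F

def pooled_attention(attn, class_masks):
    # attn: (N, N) self-attention over N pixel/patch tokens of one image.
    # class_masks: (C, N) binary token membership per class in that image.
    w = class_masks / class_masks.sum(dim=1, keepdim=True).clamp(min=1)
    return w @ attn @ w.t()  # (C, C): within- and between-class relations

def attention_transfer_loss(attn_new, attn_old, class_masks):
    # Distill the old (frozen) model's pooled relations into the new model.
    return F.mse_loss(pooled_attention(attn_new, class_masks),
                      pooled_attention(attn_old, class_masks).detach())

loss = attention_transfer_loss(torch.rand(64, 64), torch.rand(64, 64),
                               (torch.rand(5, 64) > 0.5).float())
```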
The MultiCoNER shared task aims at detecting semantically ambiguous and complex named entities in short and low-context settings across multiple languages. The lack of context makes the recognition of ambiguous named entities challenging. To alleviate this issue, our team DAMO-NLP proposes a knowledge-based system, in which we build a multilingual knowledge base based on Wikipedia to provide related context information to the named entity recognition (NER) model. Given an input sentence, our system effectively retrieves related contexts from the knowledge base. The original input sentence is then augmented with such context information, allowing significantly better contextualized token representations to be captured. Our system won 10 out of the 13 tracks in the MultiCoNER shared task.
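A minimal sketch of the input augmentation, with string matching standing in for the system's real retrieval module and a two-entry toy knowledge base as assumptions: retrieved context is concatenated to the short input so the NER encoder sees richer token representations.

```python
# Toy Wikipedia-style store; the real system retrieves from a full
# multilingual knowledge base rather than matching keys by substring.
KB = {
    "seattle": "Seattle is a seaport city in the state of Washington.",
    "kobe": "Kobe Bryant was an American professional basketball player.",
}

def retrieve_context(sentence: str) -> str:
    hits = [doc for key, doc in KB.items() if key in sentence.lower()]
    return " ".join(hits)

query = "kobe dropped 81 points"
augmented = query + " [SEP] " + retrieve_context(query)  # fed to the NER model
```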
As an effective algorithm for solving complex optimization problems, the artificial bee colony (ABC) algorithm has shown itself to be competitive; but, like other population-based algorithms, it struggles to balance the ability to search globally across the whole solution space (named exploration) against the ability to search the local solution space quickly (defined as exploitation). To improve the performance of ABC, an adaptive group collaborative ABC (AGABC) algorithm is introduced, in which the population is divided into specific groups at different stages, search strategies of different abilities are assigned to the members, and the member or strategy that obtains the best solution is adopted for further search. Experimental results on benchmark functions show that the proposed algorithm with its dynamic mechanism outperforms other algorithms in search accuracy and stability. Furthermore, numerical experiments show that the proposed method can produce optimal solutions for complex scheduling problems.
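A minimal sketch of the grouped search step on a toy objective (the two strategies, the fixed group split, and greedy selection mirror common ABC practice but are our assumptions about the specifics): different groups apply search strategies of different ability, one exploration-oriented and one exploitation-oriented.

```python
import numpy as np

def sphere(x):                          # toy benchmark objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 8))  # 20 food sources in 8 dimensions

def strat_global(x, pop):               # exploration: move toward a random peer
    peer = pop[rng.integers(len(pop))]
    return x + rng.uniform(-1, 1, x.shape) * (peer - x)

def strat_local(x, _pop):               # exploitation: small local perturbation
    return x + rng.normal(0, 0.1, x.shape)

# Each group applies its own strategy; greedy selection keeps improvements,
# as in the employed-bee phase of standard ABC.
for group, strat in ((pop[:10], strat_global), (pop[10:], strat_local)):
    for i, x in enumerate(group):
        cand = strat(x, pop)
        if sphere(cand) < sphere(x):
            group[i] = cand

best = pop[np.argmin([sphere(x) for x in pop])]
```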
Entity alignment aims to find identical entities in different knowledge graphs (KGs) that refer to the same real-world object. Embedding-based entity alignment techniques have been drawing a lot of attention recently because they can help solve the issue of symbolic heterogeneity across KGs. However, in this paper, we show that the progress made in the past was due to biased and unchallenging evaluation. We highlight two major flaws in existing datasets that favor embedding-based entity alignment techniques, i.e., the isomorphic graph structures in relation triples and the weak heterogeneity in attribute triples. Towards a critical evaluation of embedding-based entity alignment methods, we construct a new dataset with heterogeneous relations and attributes based on event-centric KGs. We conduct extensive experiments to evaluate existing popular methods and find that they fail to achieve promising performance. As a new approach to this difficult problem, we propose a time-aware literal encoder for entity alignment. The dataset and source code are publicly available to foster future research. Our work calls for more effective and practical embedding-based solutions to entity alignment.